
    Adaptive Regularization for Nonconvex Optimization Using Inexact Function Values and Randomly Perturbed Derivatives

    A regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing approximate local critical points of any order for smooth unconstrained optimization problems. For an objective function with Lipschitz continuous $p$-th derivative and given an arbitrary optimality order $q \leq p$, it is shown that this algorithm will, in expectation, compute such a point in at most $O\left(\left(\min_{j\in\{1,\ldots,q\}}\epsilon_j\right)^{-\frac{p+1}{p-q+1}}\right)$ inexact evaluations of $f$ and its derivatives whenever $q\in\{1,2\}$, where $\epsilon_j$ is the tolerance for $j$-th order accuracy. This bound becomes at most $O\left(\left(\min_{j\in\{1,\ldots,q\}}\epsilon_j\right)^{-\frac{q(p+1)}{p}}\right)$ inexact evaluations if $q>2$ and all derivatives are Lipschitz continuous. Moreover, these bounds are sharp in the order of the accuracy tolerances. An extension to convexly constrained problems is also outlined. Comment: 22 pages.
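    The abstract gives only complexity bounds, not pseudocode. As a rough illustration of the $p=2$ case, the sketch below runs an adaptive cubic-regularization loop in which the gradient is randomly perturbed before each step. Everything here is an assumption for illustration, not the paper's algorithm: the names `arc_step` and `ar2_inexact`, the fixed-point solve of the cubic model, the acceptance threshold 0.1, and the noise-aware stopping rule; the acceptance test also uses exact function values for simplicity.

```python
import numpy as np

def arc_step(g, H, sigma, iters=50):
    # Approximately minimize the cubic model m(s) = g^T s + s^T H s / 2
    # + sigma ||s||^3 / 3 via fixed-point iteration on the shifted system.
    s = np.zeros_like(g)
    for _ in range(iters):
        s_new = np.linalg.solve(H + sigma * np.linalg.norm(s) * np.eye(g.size), -g)
        if np.linalg.norm(s_new - s) < 1e-12:
            return s_new
        s = s_new
    return s

def ar2_inexact(f, grad, hess, x, noise=1e-4, tol=1e-6, sigma=1.0,
                max_it=200, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(max_it):
        g = grad(x) + noise * rng.standard_normal(x.shape)  # perturbed gradient
        if np.linalg.norm(g) <= tol + 3.0 * noise * np.sqrt(x.size):
            break                                  # noise-aware stopping rule
        H = hess(x)
        s = arc_step(g, H, sigma)
        pred = -(g @ s + 0.5 * s @ H @ s + sigma * np.linalg.norm(s)**3 / 3.0)
        if pred > 0.0 and f(x) - f(x + s) > 0.1 * pred:
            x, sigma = x + s, max(0.5 * sigma, 1e-8)  # success: relax sigma
        else:
            sigma *= 2.0                              # failure: regularize more
    return x
```

    The stopping tolerance is inflated by a multiple of the noise level, mirroring the idea that accuracy below the perturbation scale cannot be certified.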

    Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization

    A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement has negligible cost) under the assumption that the derivative of highest degree is $\beta$-Hölder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when computing objective function values and derivatives. The complexity analysis covers arbitrary optimality order and arbitrary degree of available approximate derivatives. It extends results of Cartis, Gould and Toint (2018) on the evaluation complexity to the inexact case: if a $q$-th order minimizer is sought using approximations to the first $p$ derivatives, it is proved that a suitable approximate minimizer within $\epsilon$ is computed by the proposed algorithm in at most $O(\epsilon^{-\frac{p+\beta}{p-q+\beta}})$ iterations and at most $O(|\log(\epsilon)|\,\epsilon^{-\frac{p+\beta}{p-q+\beta}})$ approximate evaluations. An algorithmic variant, although more rigid in practice, can be proved to find such an approximate minimizer in $O(|\log(\epsilon)|+\epsilon^{-\frac{p+\beta}{p-q+\beta}})$ evaluations. While the proposed framework remains so far conceptual for high degrees and orders, it is shown to yield simple and computationally realistic inexact methods when specialized to the unconstrained and bound-constrained first- and second-order cases. The deterministic complexity results are finally extended to the stochastic context, yielding adaptive sample-size rules for subsampling methods typical of machine learning. Comment: 32 pages.
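    The closing remark about adaptive sample-size rules for subsampling can be pictured schematically: grow the subsample until the estimated standard error of the averaged gradient is small relative to its norm, so the inexactness stays within what an adaptive outer algorithm tolerates. The rule below, the name `adaptive_subsample_grad`, the doubling schedule, and the factor `c` are all hypothetical, not taken from the paper.

```python
import numpy as np

def adaptive_subsample_grad(grad_i, n_data, x, rng, c=1.0):
    # Grow the subsample until the standard error of the mean gradient
    # is at most c times its norm (a simple relative-accuracy test).
    batch = 32
    while True:
        idx = rng.choice(n_data, size=batch, replace=False)
        G = np.array([grad_i(i, x) for i in idx])      # per-sample gradients
        g = G.mean(axis=0)
        se = G.std(axis=0, ddof=1) / np.sqrt(batch)    # standard error of mean
        if np.linalg.norm(se) <= c * np.linalg.norm(g) or batch >= n_data:
            return g, batch
        batch = min(2 * batch, n_data)                 # double and retry
```

    In a full method, `c` would itself shrink as the iterates approach a critical point, tightening the sample size exactly when higher accuracy is needed.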

    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and an effective preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems. Comment: 22 pages.
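    A dense toy sketch of the low-rank Schur complement update idea, assuming a diagonal (1,1) block $G = \mathrm{diag}(g)$ so that $S(g) = A G^{-1} A^T$: when only a few entries of $g$ change significantly between interior point iterations, the change to $S$ is low-rank and the Woodbury identity updates the inverse cheaply. The paper's procedure works with factorizations of sparse matrices and a more careful selection rule; the function name and the top-$k$ selection here are illustrative.

```python
import numpy as np

def schur_update(A, g_old, g_new, S_old_inv, k):
    # S(g) = A diag(g)^{-1} A^T.  If 1/g changes only in a few entries,
    # S_new = S_old + U D U^T with U = A[:, idx], D diagonal, and the
    # Woodbury identity gives S_new^{-1} from S_old^{-1} at low cost.
    delta = 1.0 / g_new - 1.0 / g_old
    idx = np.argsort(-np.abs(delta))[:k]       # columns with largest change
    U, D = A[:, idx], np.diag(delta[idx])
    SU = S_old_inv @ U
    M = np.linalg.inv(np.linalg.inv(D) + U.T @ SU)
    return S_old_inv - SU @ M @ SU.T           # Woodbury correction
```

    When the selected columns capture every entry of $g$ that changed, the update is exact; otherwise it yields an inexact preconditioner in the spirit of the abstract.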

    A matrix-free preconditioner for sparse symmetric positive definite systems and least-squares problems


    The strange case of negative reflection

    In this paper we show the phenomenon of negative reflection occurring in a mechanical phononic structure consisting of a grating of fixed inclusions embedded in a linear elastic matrix. The negative reflection is not due to the introduction of a subwavelength metastructure or of materials with negative mechanical properties. Numerical analyses for out-of-plane shear waves demonstrate that there exist frequencies at which most of the incident energy is reflected at negative angles. The effect is symmetric with respect to a line that is not parallel to the normal direction to the grating structure. Simulations at different angles of incidence and computations of the energy fluxes show that negative reflection is achievable in a wide range of loading conditions.

    Solving Nonlinear Systems of Equations Via Spectral Residual Methods: Stepsize Selection and Applications

    Spectral residual methods are derivative-free, low-cost-per-iteration procedures for solving nonlinear systems of equations. They are generally coupled with a nonmonotone linesearch strategy and compare well with Newton-based methods on large nonlinear systems and on sequences of nonlinear systems. The residual vector is used as the search direction, and the choice of steplength has a crucial impact on performance. In this work we address the steplength selection both theoretically and experimentally, and provide results on a real application, a rolling contact problem.
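    The basic iteration described above can be sketched as follows: the residual $-F(x)$ serves as search direction, a Barzilai-Borwein (spectral) coefficient scales it, and a nonmonotone backtracking test on $\|F\|^2$ decides acceptance. This is a loose DF-SANE-style illustration; the history length, safeguard constants, and sufficient-decrease term are assumptions, not the paper's specific steplength rules.

```python
import numpy as np

def spectral_residual(F, x0, tol=1e-10, max_it=500):
    # Spectral residual sketch: direction -F(x), spectral steplength lam,
    # nonmonotone backtracking on the merit function ||F(x)||^2.
    x = np.asarray(x0, float)
    Fx = F(x)
    lam = 1.0
    hist = [float(Fx @ Fx)]                    # recent merit values
    for _ in range(max_it):
        if np.linalg.norm(Fx) <= tol:
            break
        alpha = 1.0
        while True:                            # nonmonotone backtracking
            x_new = x - alpha * lam * Fx
            F_new = F(x_new)
            # accept if below the worst of the last 10 merit values
            if F_new @ F_new <= max(hist[-10:]) - 1e-4 * (alpha * lam)**2 * hist[-1]:
                break
            alpha *= 0.5
            if alpha < 1e-10:
                break
        s, y = x_new - x, F_new - Fx
        lam = abs(float(s @ s) / float(s @ y)) if abs(s @ y) > 1e-16 else 1.0
        x, Fx = x_new, F_new
        hist.append(float(Fx @ Fx))
    return x
```

    No Jacobian is ever formed, which is what makes the cost per iteration so low: each step needs only one or a few residual evaluations.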

    An optimally fast objective-function-free minimization algorithm using random subspaces

    An algorithm for unconstrained non-convex optimization is described, which does not evaluate the objective function and in which minimization is carried out, at each iteration, within a randomly selected subspace. It is shown that this random approximation technique does not affect the method's convergence nor its evaluation complexity for the search of an $\epsilon$-approximate first-order critical point, which is $\mathcal{O}(\epsilon^{-(p+1)/p})$, where $p$ is the order of derivatives used. A variant of the algorithm using approximate Hessian matrices is also analyzed and shown to require at most $\mathcal{O}(\epsilon^{-2})$ evaluations. Preliminary numerical tests show that the random-subspace technique can significantly improve performance on some problems, albeit, unsurprisingly, not for all. Comment: 23 pages.
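    The paper's method is an objective-function-free regularization algorithm; as a loose illustration of the random-subspace ingredient only, the sketch below restricts a plain gradient step, at each iteration, to the range of a random Gaussian sketch matrix $P$. With the scaling used here, $\mathbb{E}[PP^T] = I$, so the projected gradient is an unbiased estimate of the full gradient. The function name, stepsize, and iteration count are illustrative assumptions.

```python
import numpy as np

def random_subspace_gd(grad, x0, k=2, lr=0.05, iters=2000, seed=0):
    # Each step moves only within a freshly drawn k-dimensional random
    # subspace: x <- x - lr * P (P^T grad(x)), with P an n-by-k Gaussian
    # sketch scaled so that E[P P^T] = I.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    n = x.size
    for _ in range(iters):
        P = rng.standard_normal((n, k)) / np.sqrt(k)
        g_sub = P.T @ grad(x)        # gradient in subspace coordinates
        x = x - lr * (P @ g_sub)     # step lies in range(P)
    return x
```

    Each iteration touches only $k$ directions, which is where the potential savings come from when $k \ll n$ and subspace derivatives are cheaper to obtain than full ones.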

    Interfacial cracks in bi-material solids: Stroh formalism and skew-symmetric weight functions

    A new general approach for deriving the weight functions for 2D interfacial cracks in anisotropic bimaterials has been developed. For perfect interface conditions, the new method avoids the use of the Wiener-Hopf technique and the challenging factorization problem connected with it. Both symmetric and skew-symmetric weight functions can be derived by means of the new approach. Weight functions can be used for deriving singular integral formulations of interfacial cracks in anisotropic media. The proposed method can be applied to interfacial crack problems in many classes of materials: monoclinic, orthotropic, cubic, piezoelectric, poroelastic, and quasicrystalline.